Compressive and Noncompressive Power Spectral Density Estimation from Periodic Nonuniform Samples
This paper presents a novel power spectral density estimation technique for
band-limited, wide-sense stationary signals from sub-Nyquist sampled data. The
technique employs multi-coset sampling and incorporates the advantages of
compressed sensing (CS) when the power spectrum is sparse, but applies to
sparse and nonsparse power spectra alike. The estimates are consistent
piecewise constant approximations whose resolutions (width of the piecewise
constant segments) are controlled by the periodicity of the multi-coset
sampling. We show that compressive estimates exhibit better tradeoffs among the
estimator's resolution, system complexity, and average sampling rate compared
to their noncompressive counterparts. For suitable sampling patterns,
noncompressive estimates are obtained as least squares solutions. Because of
the non-negativity of power spectra, compressive estimates can be computed by
seeking non-negative least squares solutions (provided appropriate sampling
patterns exist) instead of using standard CS recovery algorithms. This
flexibility suggests a reduction in computational overhead for systems
estimating both sparse and nonsparse power spectra because one algorithm can be
used to compute both compressive and noncompressive estimates.
Comment: 26 pages, single spaced, 9 figures
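The non-negative least squares idea above can be sketched in a few lines. This is a minimal illustration, not the paper's estimator: the linear model r = A p (correlation measurements r, piecewise-constant spectrum p), the matrix A, the dimensions, and the sparse p_true are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical linear model: measured correlations r relate to the
# piecewise-constant power spectrum p through a known matrix A (r = A @ p).
n_segments = 8        # resolution: number of piecewise-constant PSD segments
n_measurements = 12   # number of (averaged) correlation measurements

A = rng.standard_normal((n_measurements, n_segments))
p_true = np.array([0.0, 0.0, 3.0, 1.5, 0.0, 0.0, 2.0, 0.0])  # sparse, non-negative PSD
r = A @ p_true + 1e-6 * rng.standard_normal(n_measurements)  # small noise

# Non-negativity of power spectra lets us solve with NNLS instead of a
# standard CS recovery algorithm; the same call also handles nonsparse p.
p_est, residual = nnls(A, r)
print(np.round(p_est, 3))
```

Because the estimate is a constrained least squares solution, the same solver covers both the sparse (compressive) and nonsparse (noncompressive) regimes, which is the computational flexibility the abstract points to.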
Non-disturbing quantum measurements
We consider pairs of quantum observables (POVMs) and analyze the relation
between the notions of non-disturbance, joint measurability and commutativity.
We specify conditions under which these properties coincide or
differ, depending for instance on the interplay between the number of outcomes
and the Hilbert space dimension or on algebraic properties of the effect
operators. We also show that (non-)disturbance is in general not a symmetric
relation and that it can be decided and quantified by means of a semidefinite
program.
Comment: Minor corrections in v
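Commutativity of effect operators, one of the properties the abstract relates to non-disturbance and joint measurability, is easy to test numerically. A minimal qubit sketch (the POVMs and the tolerance are illustrative assumptions; this checks commutativity only, not the SDP for disturbance):

```python
import numpy as np

# Pauli matrices used to build two-outcome qubit POVMs.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def povm(direction):
    """Sharp two-outcome qubit POVM with effects 0.5*(I +/- direction)."""
    E = 0.5 * (I2 + direction)
    return [E, I2 - E]

def commute(P, Q, tol=1e-12):
    """True if every effect of POVM P commutes with every effect of POVM Q."""
    return all(np.linalg.norm(a @ b - b @ a) < tol for a in P for b in Q)

X = povm(sx)  # sigma_x measurement
Z = povm(sz)  # sigma_z measurement
print(commute(X, X))  # True: a POVM's effects commute among themselves
print(commute(X, Z))  # False: sigma_x and sigma_z effects do not commute
```

Commuting effects are a sufficient condition for joint measurability; the abstract's point is that for general POVMs these notions can come apart.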
Equivariant imaging: Learning beyond the range space
In various imaging problems, we only have access to compressed measurements
of the underlying signals, hindering most learning-based strategies which
usually require pairs of signals and associated measurements for training.
Learning only from compressed measurements is impossible in general, as the
compressed observations do not contain information outside the range of the
forward sensing operator. We propose a new end-to-end self-supervised framework
that overcomes this limitation by exploiting the equivariances present in
natural signals. Our proposed learning strategy performs as well as fully
supervised methods. Experiments demonstrate the potential of this framework on
inverse problems including sparse-view X-ray computed tomography on real
clinical data and image inpainting on natural images. Code will be released.
Comment: Technical report
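The training objective behind equivariant imaging can be sketched as two terms: measurement consistency and an equivariance penalty that probes directions outside the operator's range. Everything below is an illustrative toy (linear "network" f, cyclic-shift group action, Gaussian operator A), not the released code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 8
A = rng.standard_normal((m, n))   # incomplete forward operator (m < n)
A_pinv = np.linalg.pinv(A)

def shift(x, k=1):
    """Group action: cyclic shift, an assumed symmetry of the signal class."""
    return np.roll(x, k)

def f(y, W):
    """Toy linear reconstruction 'network': x_hat = W @ A_pinv @ y."""
    return W @ (A_pinv @ y)

def ei_loss(W, y, k=1):
    x1 = f(y, W)
    # Measurement consistency: reconstructions must reproduce the data.
    mc = np.sum((A @ x1 - y) ** 2)
    # Equivariance: f should commute with the group action. Shifted signals
    # leave range(A^T), so this term supervises the nullspace component.
    x2 = shift(x1, k)
    eq = np.sum((f(A @ x2, W) - x2) ** 2)
    return mc + eq

x = rng.standard_normal(n)
y = A @ x                 # only compressed measurements are observed
loss = ei_loss(np.eye(n), y)
print(loss)
```

No ground-truth signal pairs enter the loss, which is the sense in which the framework is self-supervised.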
Sensing Theorems for Unsupervised Learning in Linear Inverse Problems
Solving an ill-posed linear inverse problem requires knowledge about the underlying signal model. In many applications, this model is a priori unknown and has to be learned from data. However, it is impossible to learn the model using observations obtained via a single incomplete measurement operator, as there is no information about the signal model in the nullspace of the operator, resulting in a chicken-and-egg problem: to learn the model we need reconstructed signals, but to reconstruct the signals we need to know the model. Two ways to overcome this limitation are using multiple measurement operators or assuming that the signal model is invariant to a certain group action. In this paper, we present necessary and sufficient sensing conditions for learning the signal model from measurement data alone, which depend only on the dimension of the model and on the number of operators or the properties of the group action that the model is invariant to. As our results are agnostic to the learning algorithm, they shed light on the fundamental limitations of learning from incomplete data and have implications for a wide range of practical algorithms, such as dictionary learning, matrix completion and deep neural networks.
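The nullspace obstruction and the multiple-operator remedy can be made concrete with a toy example (the coordinate-subsampling operators are illustrative assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# One incomplete operator: observes only the first three coordinates.
A1 = np.eye(n)[:3]

# Two signals that differ only inside the nullspace of A1.
x = rng.standard_normal(n)
x_alt = x.copy()
x_alt[3:] += 1.0   # perturbation invisible to A1

print(np.allclose(A1 @ x, A1 @ x_alt))  # True: A1 alone cannot tell them apart

# A second operator covering the complementary coordinates resolves the
# ambiguity, which is why multiple operators enable model identification.
A2 = np.eye(n)[3:]
print(np.allclose(A2 @ x, A2 @ x_alt))  # False: A2 separates the two signals
```

The paper's sensing conditions characterize exactly when a family of operators (or a group action) removes this kind of ambiguity for a whole signal model.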
A Sketching Framework for Reduced Data Transfer in Photon Counting Lidar
Single-photon lidar has become a prominent tool for depth imaging in recent
years. At the core of the technique, the depth of a target is measured by
constructing a histogram of time delays between emitted light pulses and
detected photon arrivals. A major data processing bottleneck arises on the
device when either the number of photons per pixel is large or the resolution
of the time stamp is fine, as both the space requirement and the complexity of
the image reconstruction algorithms scale with these parameters. We solve this
limiting bottleneck of existing lidar techniques by sampling the characteristic
function of the time of flight (ToF) model to build a compressive statistic, a
so-called sketch of the time delay distribution, which is sufficient to infer
the spatial distance and intensity of the object. The size of the sketch scales
with the degrees of freedom of the ToF model (number of objects) and not,
fundamentally, with the number of photons or the time stamp resolution.
Moreover, the sketch is highly amenable for on-chip online processing. We show
theoretically that the loss of information for compression is controlled and
the mean squared error of the inference quickly converges towards the optimal
Cramér-Rao bound (i.e. no loss of information) for modest sketch sizes. The
proposed compressed single-photon lidar framework is tested and evaluated on
real-life datasets of complex scenes, where it is shown that a compression rate
of up to 150 is achievable in practice without sacrificing the overall
resolution of the reconstructed image.
Comment: 16 pages, 20 figures
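The core idea, replacing the per-pixel histogram with a few samples of the empirical characteristic function, can be sketched for a single-peak ToF model. All parameters below (period, jitter, photon count, number of frequencies) are illustrative assumptions, and the phase-based depth estimate is a simplification of the paper's inference:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100.0        # repetition period (hypothetical time units)
t0 = 37.2        # true time of flight of the single target
sigma = 0.5      # pulse jitter / instrument response width
n_photons = 20000

# Photon arrival times modulo the repetition period.
t = (t0 + sigma * rng.standard_normal(n_photons)) % T

# Compressive sketch: empirical characteristic function at K frequencies.
# Its size is 2K real numbers, independent of the photon count and of the
# timestamp resolution, and it can be accumulated online on-chip.
K = 4
freqs = 2 * np.pi * np.arange(1, K + 1) / T
sketch = np.exp(1j * np.outer(freqs, t)).mean(axis=1)

# Delay estimate from the phase of the first sketch coefficient:
# E[exp(i*w*t)] = exp(i*w*t0) * exp(-w^2 * sigma^2 / 2).
t0_hat = (np.angle(sketch[0]) % (2 * np.pi)) / freqs[0]
print(t0_hat)
```

A full histogram here would need on the order of T divided by the timestamp resolution bins per pixel, whereas the sketch stores only 2K numbers, which is the source of the compression rates reported in the abstract.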